
    Interactive Symptom Elicitation for Diagnostic Information Retrieval

    Medical information retrieval suffers from a dual problem: users struggle to describe what they are experiencing in medical terms, and the search engine struggles to retrieve information that exactly matches what users are experiencing. We demonstrate interactive symptom elicitation for diagnostic information retrieval. Interactive symptom elicitation builds a model from the user's initial description of the symptoms and interactively elicits new information by posing questions about related, but uncertain, symptoms. As a result, the system interactively learns estimates of the symptoms while controlling the uncertainties related to the diagnostic process. The learned model is then used to rank the diagnoses that the user might be experiencing. Our preliminary experimental results show that interactive symptom elicitation can significantly improve users' ability to describe their symptoms, increase the confidence of the model, and enable effective diagnostic information retrieval.
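
    The abstract does not specify the underlying model, so the following is a minimal sketch under the assumption of a naive-Bayes style symptom-diagnosis model: the system starts from the user's initial description, repeatedly asks about the most uncertain unasked symptom, and re-ranks diagnoses after each answer. The probability tables, symptom names, and uncertainty heuristic are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical conditional probabilities P(symptom | diagnosis); toy values only.
P_SYMPTOM_GIVEN_DIAGNOSIS = {
    "flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.1},
    "measles": {"fever": 0.8, "cough": 0.4, "rash": 0.9},
    "allergy": {"fever": 0.1, "cough": 0.5, "rash": 0.4},
}
PRIOR = {"flu": 1 / 3, "measles": 1 / 3, "allergy": 1 / 3}


def posterior(answers):
    """Naive-Bayes posterior over diagnoses given {symptom: True/False} answers."""
    scores = {}
    for dx, prior in PRIOR.items():
        p = prior
        for symptom, present in answers.items():
            likelihood = P_SYMPTOM_GIVEN_DIAGNOSIS[dx][symptom]
            p *= likelihood if present else (1.0 - likelihood)
        scores[dx] = p
    total = sum(scores.values()) or 1.0
    return {dx: s / total for dx, s in scores.items()}


def most_uncertain_symptom(answers, post):
    """Pick the unasked symptom whose predicted presence is closest to 0.5."""
    all_symptoms = set(next(iter(P_SYMPTOM_GIVEN_DIAGNOSIS.values())))
    candidates = all_symptoms - set(answers)
    def predicted(symptom):
        return sum(post[dx] * P_SYMPTOM_GIVEN_DIAGNOSIS[dx][symptom] for dx in post)
    return min(candidates, key=lambda s: abs(predicted(s) - 0.5), default=None)


# Elicitation loop: start from the initial description, then keep asking
# about the most uncertain symptom and re-ranking the diagnoses.
answers = {"cough": True}      # initial free-text description mapped to a symptom
for _ in range(2):
    post = posterior(answers)
    question = most_uncertain_symptom(answers, post)
    if question is None:
        break
    answers[question] = True   # a real system would prompt the user; we simulate "yes"
print(posterior(answers))      # ranked diagnoses after elicitation
```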

    Methods and applications for ontology-based recommender systems

    Recommender systems are a specific type of information filtering system used to identify a set of objects that are relevant to a user. Instead of the user actively searching for information, recommender systems advise users about objects they might wish to examine. Content-based recommender systems deal with problems related to analyzing the content, making heterogeneous content interoperable, and retrieving content relevant to the user. This thesis explores ontology-based methods to reduce these problems and evaluates the applicability of the methods in recommender systems. First, content analysis is improved by developing an automatic annotation method that produces structured ontology-based annotations from text. Second, an event-based method is developed to enable interoperability of heterogeneous content representations. Third, methods for semantic content retrieval are developed to determine relevant objects for the user. The methods are implemented as part of recommender systems in two cultural heritage information systems: CULTURESAMPO and SMARTMUSEUM. The performance of the methods was evaluated through user studies. The results can be divided into five parts. First, the results show improvement in automatic content analysis over state-of-the-art methods, with performance close to that of human annotators. Second, the results show that the event-based method is suitable for bridging heterogeneous content representations. Third, the retrieval methods perform accurately compared to user opinions. Fourth, semantic distance measures are compared to determine the best query expansion strategy. Finally, practical solutions are developed to enable user profiling and result clustering. The results show that ontology-based methods enable interoperability of heterogeneous knowledge representations and result in accurate recommendations. The deployment of the methods in practical recommender systems shows the applicability of the results in real-life settings.
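
    To make the query-expansion idea concrete, here is a minimal sketch of ontology-based expansion with a semantic distance measure: a query concept is expanded with its ontology neighbours, weighted by graph distance. The toy concept hierarchy, the networkx dependency, and the inverse-distance weighting are illustrative assumptions, not the thesis' actual implementation.

```python
import networkx as nx

# Hypothetical cultural-heritage concept hierarchy (edges read "is-a" / "related-to").
ontology = nx.Graph()
ontology.add_edges_from([
    ("painting", "artwork"), ("sculpture", "artwork"),
    ("artwork", "cultural_object"), ("museum", "cultural_object"),
])


def expand_query(concept, max_distance=2):
    """Expand a query concept with ontology neighbours, weighting each by
    1 / (1 + shortest-path distance) as a simple semantic distance measure."""
    lengths = nx.single_source_shortest_path_length(
        ontology, concept, cutoff=max_distance)
    return {c: 1.0 / (1.0 + d) for c, d in lengths.items()}


print(expand_query("painting"))
# {'painting': 1.0, 'artwork': 0.5, 'sculpture': 0.33..., 'cultural_object': 0.33...}
```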

    Neuroadaptive modelling for generating images matching perceptual categories

    Brain-computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant's brain signals as feedback to adapt a boundless generative model and generate new information matching the participant's intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to focus on specific perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user's intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up, in which participants evaluated the output, shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator's perceptual categories.
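
    A minimal sketch of the feedback loop described above: relevance scores decoded from EEG weight the latent vectors of the presented images, and new images are generated around the relevance-weighted mean latent. The generator and the EEG classifier are random placeholders, and the weighted-averaging update is an assumption for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64


def generate_image(z):
    """Placeholder for a GAN generator G(z) -> image."""
    return z  # a real system would return pixels


def eeg_relevance(image):
    """Placeholder for a classifier mapping single-trial EEG to relevance in [0, 1]."""
    return rng.random()


def neuroadaptive_step(z_estimate, batch_size=8, noise=0.5):
    """One iteration: show a batch of images sampled around the current estimate,
    collect brain-based feedback, and move the estimate toward relevant latents."""
    latents = z_estimate + noise * rng.standard_normal((batch_size, LATENT_DIM))
    scores = np.array([eeg_relevance(generate_image(z)) for z in latents])
    weights = scores / scores.sum()
    return weights @ latents            # relevance-weighted mean latent


z = np.zeros(LATENT_DIM)                # start from the latent-space origin
for _ in range(10):
    z = neuroadaptive_step(z)
final_image = generate_image(z)         # image matching the inferred perceptual category
```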

    Interactive faceted query suggestion for exploratory search : Whole-session effectiveness and interaction engagement

    The outcome of exploratory information retrieval depends not only on the effectiveness of individual responses to a set of queries, but also on the relevant information retrieved during the entire exploratory search session. We study the effect of search assistance, operationalized as interactive faceted query suggestion, on both whole-session effectiveness and interaction engagement. A user experiment is reported in which users performed exploratory search tasks, comparing interactive faceted query suggestion with a control condition offering only conventional typed-query interaction. Data comprising interaction and search logs show that the availability of interactive faceted query suggestion substantially improves whole-session effectiveness by increasing recall without sacrificing precision. The increased engagement with interactive faceted query suggestion is targeted at directed, situated navigation around the initial query scope, but is not found to improve individual queries on average. The results imply that research in exploratory search should focus on measuring and designing tools that engage users with directed, situated navigation support for improving whole-session performance.
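
    A minimal sketch of how faceted query suggestion can work in principle: facet values are mined from the documents matching the current query and offered as one-click refinements that narrow the session scope. The toy corpus, the facet representation, and the frequency-based ranking are assumptions for illustration, not the paper's system.

```python
from collections import Counter

# Hypothetical document collection with keyword "facets".
docs = [
    {"text": "deep learning for image retrieval", "facets": {"neural networks", "images"}},
    {"text": "probabilistic models for image search", "facets": {"bayesian", "images"}},
    {"text": "deep learning for speech", "facets": {"neural networks", "speech"}},
]


def matching(query_terms):
    """Documents whose text contains every current query term."""
    return [d for d in docs if all(t in d["text"] for t in query_terms)]


def suggest_facets(query_terms, k=3):
    """Rank facet values by how often they occur in the current result set."""
    counts = Counter(f for d in matching(query_terms) for f in d["facets"])
    return [facet for facet, _ in counts.most_common(k)]


query = ["learning"]
print(suggest_facets(query))    # e.g. ['neural networks', 'images', 'speech']
# Selecting a suggestion narrows the session scope without re-typing the query:
query.append("image")
print(suggest_facets(query))    # facets remaining in the narrowed result set
```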

    Watching inside the Screen: Digital Activity Monitoring for Task Recognition and Proactive Information Retrieval

    We investigate to what extent it is possible to infer a user's work tasks by digital activity monitoring and to use the task models for proactive information retrieval. Ten participants volunteered for the study, in which their computer screens were monitored and the related logs were recorded for 14 days. Corresponding diary entries were collected to provide ground truth for the task detection method. We report two experiments using this data. The unsupervised task detection experiment detected tasks using unsupervised topic modeling. The results show an average task detection accuracy of more than 70% when using rich screen-monitoring data. The single-trial task detection and retrieval experiment used unseen user inputs to detect related work tasks and retrieve task-relevant information online. We report an average task detection accuracy of 95%, with a Normalized Discounted Cumulative Gain of 98% for the corresponding model-based document retrieval. We discuss and provide insights regarding the types of digital tasks occurring in the data, the accuracy of task detection on different task types, and the role of different data inputs, such as application names, extracted keywords, and bag-of-words representations, in the task detection process. We also discuss the implications of our results for ubiquitous user modeling and privacy.
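
    A minimal sketch of topic-model-based task detection in this spirit: LDA topics stand in for work tasks, and an unseen input is matched to the logged activity whose topic distribution it most resembles. The screen-log snippets are toy data and the scikit-learn pipeline is an illustrative assumption, not the study's exact setup.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical screen-log snippets (application names + extracted keywords).
screen_logs = [
    "latex overleaf figure results paper deadline",
    "python jupyter dataframe plot experiment",
    "email calendar meeting agenda minutes",
    "latex bibliography citation reviewer response",
    "python traceback debugger unit test",
    "email reply attachment schedule room",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(screen_logs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
log_topics = lda.transform(X)          # per-log topic distributions (the "tasks")

# Single-trial detection: map an unseen input to the closest logged activity,
# which can then drive retrieval of task-relevant documents.
new_input = "overleaf paper figure caption"
new_topics = lda.transform(vectorizer.transform([new_input]))
best = int(np.argmax(cosine_similarity(new_topics, log_topics)))
print("detected task context:", screen_logs[best])
```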

    Proactive Information Retrieval via Screen Surveillance

    We demonstrate proactive information retrieval via screen surveillance. A user's digital activities are continuously monitored by capturing all content on the user's screen using optical character recognition. This covers all applications and services being used and reflects each individual user's computer usage, such as their Web browsing, emails, instant messaging, and word processing. Topic modeling is then applied to detect the user's topical activity context and retrieve information. We demonstrate a system that proactively retrieves information from a user's observed on-screen activity history when the user is performing unseen activities on a personal computer. We report an evaluation with ten participants that shows high user satisfaction and retrieval effectiveness. Our demonstration and experimental results show that surveillance of a user's screen can be used to build an extremely rich model of the user's digital activities across application boundaries and to enable effective proactive information retrieval.
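
    A minimal sketch of the capture side of such a pipeline: periodically grab the screen, run OCR, and append the recognised text to an activity history that a topic model (as in the previous sketch) can index. The library choices (Pillow's ImageGrab and pytesseract) and the fixed capture interval are assumptions, not the paper's actual stack.

```python
import time
from PIL import ImageGrab       # pip install pillow
import pytesseract              # pip install pytesseract (needs the tesseract binary)

activity_history = []           # one text snapshot per capture interval


def capture_once():
    """Grab the full screen, OCR everything visible, and log the text."""
    screenshot = ImageGrab.grab()
    text = pytesseract.image_to_string(screenshot)
    activity_history.append({"time": time.time(), "text": text})


# A real monitor would run continuously in the background across all
# applications; three iterations are enough to illustrate the loop.
for _ in range(3):
    capture_once()
    time.sleep(5)
```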

    Collaborative Filtering with Preferences Inferred from Brain Signals

    Collaborative filtering is a common technique in which interaction data from a large number of users are used to recommend items that an individual may prefer but has not yet interacted with. Previous approaches have achieved this using a variety of behavioral signals, from dwell time and clickthrough rates to self-reported ratings. However, such signals are mere estimates of the users' real underlying preferences. Here, we use brain-computer interfacing to infer preferences directly from the human brain. We then utilize these preferences in a collaborative filtering setting and report results from an experiment where brain-inferred preferences are used in a neural collaborative filtering framework. Our results demonstrate, for the first time, that brain-computer interfacing can provide a viable alternative to behavioral and self-reported preferences in realistic recommendation scenarios. We also discuss the broader implications of our findings for personalization systems and user privacy.
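
    A minimal sketch of neural collaborative filtering trained on preference labels that come from a brain decoder rather than from clicks or ratings. The decoder is a random placeholder, and the embedding-plus-MLP architecture and PyTorch training loop are generic assumptions, not the paper's exact framework.

```python
import torch
import torch.nn as nn

N_USERS, N_ITEMS, DIM = 100, 500, 32


def brain_decoded_preference(users, items):
    """Placeholder for a BCI classifier: single-trial EEG -> preferred (1) or not (0)."""
    return torch.randint(0, 2, (users.shape[0], 1)).float()


class NCF(nn.Module):
    """Generic neural collaborative filtering: user/item embeddings fed to an MLP."""
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(N_USERS, DIM)
        self.item_emb = nn.Embedding(N_ITEMS, DIM)
        self.mlp = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return self.mlp(x)          # preference logit


model = NCF()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Training loop: interactions are labelled by the brain decoder instead of
# behavioural signals such as dwell time or explicit ratings.
for _ in range(100):
    users = torch.randint(0, N_USERS, (64,))
    items = torch.randint(0, N_ITEMS, (64,))
    labels = brain_decoded_preference(users, items)
    loss = loss_fn(model(users, items), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```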